
    Computer-Based Cognitive Training Improves Brain Functional Connectivity in the Attentional Networks: A Study With Primary School-Aged Children

    We have shown that a computer-based program that trains schoolchildren in cognitive tasks mainly tapping working memory (WM), implemented by teachers and integrated into the school routine, improved cognitive and academic skills compared with an active control group. Concretely, improvements were observed in inhibition skills, non-verbal IQ, mathematics and reading skills. Here, we focus on a subsample from the overarching study who volunteered to be scanned using a resting-state fMRI protocol before and 6 months after training. This sample reproduced the aforementioned behavioral effects, and brain functional connectivity changes were observed within the attentional networks (ATN), linked to improvements in inhibitory control. Findings showed stronger relationships between inhibitory control scores and functional connectivity in a right middle frontal gyrus (MFG) cluster in trained children compared to children from the control group. Seed-based analyses revealed that connectivity between the r-MFG and homolateral parietal and superior temporal areas was more strongly related to inhibitory control in trained children than in the control group. These findings highlight the relevance of computer-based cognitive training, integrated into real-life school environments, in boosting cognitive/academic performance and brain functional connectivity.
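
    As a rough illustration of the seed-based analysis mentioned above, the sketch below correlates a seed region's mean BOLD time series with every voxel's time series and Fisher z-transforms the result. It is a minimal stand-in assuming already-preprocessed time series; the array shapes, toy data and clipping threshold are illustrative assumptions, not the study's pipeline.

        import numpy as np

        def seed_connectivity(seed_ts, voxel_ts):
            """Correlate a seed region's mean time series with every voxel.

            seed_ts  : (T,) mean BOLD signal of the seed (e.g., an r-MFG cluster)
            voxel_ts : (T, V) BOLD signals for V voxels
            Returns Fisher z-transformed correlations, one per voxel.
            """
            seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
            vox = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
            r = (seed @ vox) / len(seed)                       # Pearson r per voxel
            return np.arctanh(np.clip(r, -0.999999, 0.999999)) # Fisher z

        # Toy usage: 200 time points, 1000 voxels (simulated data)
        rng = np.random.default_rng(0)
        seed = rng.standard_normal(200)
        voxels = rng.standard_normal((200, 1000)) + 0.3 * seed[:, None]
        z = seed_connectivity(seed, voxels)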

    Accelerating fibre orientation estimation from diffusion weighted magnetic resonance imaging using GPUs

    With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can execute thousands of lightweight threads simultaneously. In this study, we propose and implement a parallel GPU-based design of a popular method used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity non-invasively and in vivo. We parallelise the Bayesian inference framework for the ball & stick model, as implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we utilise the Compute Unified Device Architecture (CUDA) programming model. We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude when comparing a single GPU with the respective sequential single-core CPU version. We also illustrate similar speed-up factors (up to 120x) when comparing a multi-GPU with a multi-CPU implementation.
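
    The ball & stick parameters are estimated per voxel, and voxels are mutually independent, which is what makes the MCMC so amenable to GPU parallelism. The following sketch emulates that one-chain-per-voxel layout with vectorized numpy; the toy log-posterior stands in for FSL's actual ball & stick likelihood, which is an assumption for illustration. In the real CUDA implementation, each lightweight thread would run one such chain.

        import numpy as np

        def metropolis_voxelwise(log_post, theta0, n_iter=1000, step=0.05, rng=None):
            """Run independent Metropolis chains for many voxels at once.

            log_post : vectorized log-posterior mapping (V, P) params -> (V,) values
            theta0   : (V, P) initial parameters, one row per voxel
            Voxels are independent, which is exactly what maps well onto
            thousands of GPU threads (here emulated with numpy rows).
            """
            rng = rng or np.random.default_rng(0)
            theta = theta0.copy()
            lp = log_post(theta)
            for _ in range(n_iter):
                prop = theta + step * rng.standard_normal(theta.shape)
                lp_prop = log_post(prop)
                # Accept/reject each voxel's proposal independently
                accept = np.log(rng.random(len(theta))) < lp_prop - lp
                theta[accept] = prop[accept]
                lp[accept] = lp_prop[accept]
            return theta

        # Toy example: standard-normal posterior per voxel, 10000 "voxels"
        log_post = lambda th: -0.5 * np.sum(th**2, axis=1)
        samples = metropolis_voxelwise(log_post, np.ones((10000, 2)))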

    Observation of Point-Light-Walker Locomotion Induces Motor Resonance When Explicitly Represented: An EEG Source Analysis Study

    Understanding human motion, in order to infer the goal of others' actions, is thought to involve the observer's motor repertoire. One prominent class of actions, human locomotion, has been the object of several studies, all focused on manipulating the shape of degraded human figures such as point-light walker (PLW) stimuli, represented as walking on the spot. Nevertheless, since the main goal of the locomotor function is to displace the whole body from one position to another, these stimuli might not fully represent a goal-directed action and thus might not induce the same motor resonance mechanism expected when observing natural locomotion. To explore this hypothesis, we recorded event-related potentials (ERPs) during the decoding of canonical/scrambled and translating/centered PLWs. We identified a novel ERP component (N2c) over central electrodes, around 435 ms after stimulus onset, for translating compared to centered PLWs, only when the canonical shape was preserved. Consistent with our hypothesis, source analysis associated this component with the activation of trunk and lower-leg primary sensorimotor and supplementary motor areas. These results confirm the role of the observer's own motor repertoire in processing human action and suggest that ERPs can detect the associated motor resonance only when the human figure is explicitly involved in performing a meaningful action.
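
    For context, an ERP component such as the N2c reported here is typically scored as the mean amplitude of the trial-averaged waveform inside a latency window. A minimal sketch, assuming baseline-corrected epochs and an illustrative window around 435 ms; the exact window, channel count and data below are assumptions, not the paper's parameters.

        import numpy as np

        def erp_component_amplitude(epochs, times, window=(0.395, 0.475)):
            """Mean ERP amplitude in a latency window (e.g., around 435 ms).

            epochs : (n_trials, n_channels, n_times) baseline-corrected EEG
            times  : (n_times,) epoch time axis in seconds
            Returns the per-channel mean amplitude of the average ERP
            inside the window, the usual way a component is scored.
            """
            erp = epochs.mean(axis=0)                     # average over trials
            mask = (times >= window[0]) & (times <= window[1])
            return erp[:, mask].mean(axis=1)              # (n_channels,)

        # Toy comparison of two conditions on simulated 64-channel epochs
        rng = np.random.default_rng(1)
        times = np.linspace(-0.2, 0.8, 501)
        translating = rng.standard_normal((120, 64, 501))
        centered = rng.standard_normal((120, 64, 501))
        diff = (erp_component_amplitude(translating, times)
                - erp_component_amplitude(centered, times))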

    Tactile perception during action observation

    It has been suggested that tactile perception becomes less acute during movement in order to optimize motor control and to prevent an overload of the afferent information generated during action. This empirical phenomenon, known as the "tactile gating effect," has been associated with mechanisms of sensory feedback prediction. However, less attention has been given to the tactile attenuation effect during the observation of an action. The aim of this study was to investigate whether and how the observation of a goal-directed action influences tactile perception, as it does during overt action. In a first experiment, we recorded participants' vocal reaction times (RTs) to tactile stimulations during the observation of a reach-to-grasp action. The stimulations were delivered on body parts that were either congruent or incongruent with the observed effector (the right hand and the right leg, respectively). The tactile stimulation was contrasted with a non-body-related stimulation (an auditory beep). We found increased RTs for tactile congruent stimuli compared to both tactile incongruent and auditory stimuli. This effect was observed only during the observation of the reaching phase, whereas RTs were not modulated during the grasping phase. A tactile two-alternative forced-choice (2AFC) discrimination task was then conducted in order to quantify the changes in tactile sensitivity during the observation of the same goal-directed actions. In agreement with the first experiment, the perceived tactile intensity was reduced only during the reaching phase. These results suggest that tactile processing during action observation relies on a process similar to that occurring during action execution.
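
    A standard way to quantify sensitivity in a 2AFC task is d', computed from the proportion of correct responses; a drop in d' during the reaching phase would capture the attenuation described above. A minimal sketch of that computation; the proportions below are hypothetical, not the study's data.

        import numpy as np
        from scipy.stats import norm

        def dprime_2afc(p_correct):
            """Sensitivity index for a two-alternative forced-choice task.

            For an unbiased 2AFC observer, d' = sqrt(2) * z(proportion correct).
            """
            p = np.clip(p_correct, 0.01, 0.99)  # avoid infinite z-scores
            return np.sqrt(2) * norm.ppf(p)

        # Hypothetical proportions correct per observation phase
        print(dprime_2afc(0.82))  # baseline
        print(dprime_2afc(0.71))  # during reaching: lower tactile sensitivity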

    Natural Translating Locomotion Modulates Cortical Activity at Action Observation

    The present study examined whether the translational component of locomotion modulates cortical activity recorded during action observation. Previous studies focusing on the visual processing of biological motion mainly presented point-light walkers fixed on a spot, thus removing the net translation toward a goal that remains a critical feature of locomotor behavior. We hypothesized that if biological motion recognition relies on the transformation of seeing into doing and its expected sensory consequences, a significant effect of translating compared to centered displays on sensorimotor cortical activity should be expected. To this aim, we explored whether EEG activity in the theta (4–8 Hz), alpha (8–12 Hz), beta 1 (14–20 Hz) and beta 2 (20–32 Hz) frequency bands exhibited selectivity as participants viewed four types of stimuli: a centered walker, a centered scrambled display, a translating walker and a translating scrambled display. We found higher theta synchronization for observed stimuli with a familiar shape. Higher power decreases in the beta 1 and beta 2 bands, indicating stronger motor resonance, were elicited by translating compared to centered stimuli. Finally, beta-band modulation in superior parietal areas showed that the translational component of locomotion induced greater motor resonance than the human shape itself. Using a multinomial logistic regression classifier, we found that dorsal-parietal and inferior-frontal regions of interest (ROIs), constituting the core of the action-observation system, were the only areas capable of discriminating all four conditions, as reflected by beta activities. Our findings suggest that the embodiment elicited by an observed scenario is strongly mediated by horizontal body displacement.
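
    A plausible reading of the decoding analysis is: extract log band power per channel in the four bands, then train a multinomial logistic regression over the four stimulus conditions. The sketch below illustrates that pipeline on toy data with scipy and scikit-learn; the sampling rate, channel count and Welch settings are assumptions, not the study's parameters.

        import numpy as np
        from scipy.signal import welch
        from sklearn.linear_model import LogisticRegression

        BANDS = {"theta": (4, 8), "alpha": (8, 12),
                 "beta1": (14, 20), "beta2": (20, 32)}

        def band_power_features(epochs, fs=250):
            """Log band power per channel for each trial.

            epochs : (n_trials, n_channels, n_times) EEG segments
            Returns an (n_trials, n_channels * n_bands) feature matrix.
            """
            f, pxx = welch(epochs, fs=fs, nperseg=fs, axis=-1)
            feats = [np.log(pxx[..., (f >= lo) & (f < hi)].mean(axis=-1))
                     for lo, hi in BANDS.values()]
            return np.concatenate(feats, axis=-1)

        # Toy 4-class decoding of the stimulus conditions
        rng = np.random.default_rng(2)
        X = band_power_features(rng.standard_normal((200, 8, 500)))
        y = rng.integers(0, 4, 200)  # walker/scrambled x translating/centered
        clf = LogisticRegression(max_iter=1000).fit(X, y)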

    Neural dynamics of between-speaker convergence during speech conversation: a dual-EEG study

    When two people engage in a conversation, their speech tends to converge. Little is known about the neural mechanisms contributing to this phenomenon. We used the Word Domino task, in which two speakers take turns chaining bi-syllabic words according to a rhyming rule: the first syllable of each word has to rhyme with the last syllable of the previous word. We developed a robust automatic method to extract quantitative indexes of convergence at the single-word-pair level, with minimal a priori hypotheses, from Mel-frequency cepstral coefficients (MFCCs). A data-driven, text-independent, automatic speaker identification technique based on GMM-UBM (Gaussian Mixture Model-Universal Background Model) was used to extract convergence. The Gaussian components model the underlying broad phonetic features that characterize a speaker's voice. Each speaker-dependent model was tested on speech produced by that speaker's conversational partner, and the model's goodness of fit defines the convergence index. Dual-EEG (electroencephalography) was recorded to search for the neural underpinnings of speech convergence. This was done by splitting the neural signals of both the speaker's (-500 ms to 0 ms before speech onset) and the listener's (0 ms to 500 ms during listening) brain into convergent and non-convergent epochs. Convergence induced significant desynchronization in the speaking brain in the 12-14 Hz band, from -270 ms to -190 ms before speech onset, in a fronto-central anterior scalp region (F5 FC5 FT7 F7). The listening brain showed desynchronization in the high-beta band (24-29 Hz; 32-34 Hz) with a left (F3 F5 F7 FC5 FC3 C3 C5) and right (AF8 F6 F8 FT8 FC6) fronto-central scalp topography between 50 ms and 100 ms after listening onset. These results demonstrate that low- and high-beta oscillatory dynamics contribute to the emergence of speech convergence. The topography suggests the involvement of partially lateralized fronto-central regions, possibly including Broca's area. Our hyperscanning approach offers insight into the inter-brain neural dynamics at play in natural conversation.
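
    The convergence index can be pictured as follows: fit a GMM to one speaker's MFCC frames, then score the partner's frames under that model, with a higher log-likelihood indicating stronger convergence. The sketch below is a simplified stand-in for full GMM-UBM training with MAP adaptation, using scikit-learn on random "MFCC" frames; the component count and feature dimensionality are illustrative assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_speaker_model(mfcc_frames, n_components=32):
            """Fit a GMM to one speaker's MFCC frames (a simplified stand-in
            for GMM-UBM training with MAP adaptation)."""
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="diag", random_state=0)
            return gmm.fit(mfcc_frames)

        def convergence_index(speaker_gmm, partner_mfcc):
            """Mean log-likelihood of the partner's frames under the speaker's
            model: the better the fit, the more the partner has converged."""
            return speaker_gmm.score(partner_mfcc)

        # Toy usage with random "MFCC" frames (13 coefficients per frame)
        rng = np.random.default_rng(3)
        gmm_a = fit_speaker_model(rng.standard_normal((2000, 13)))
        print(convergence_index(gmm_a, rng.standard_normal((300, 13))))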

    The neural oscillatory markers of phonetic convergence during verbal interaction

    During a conversation, the neural processes supporting speech production and perception overlap in time and, depending on context, expectations and the dynamics of the interaction, are also continuously modulated in real time. Recently, the growing interest in the neural dynamics underlying interactive tasks, in particular in the language domain, has mainly tackled the temporal aspects of turn-taking in dialogs. Besides temporal coordination, an under-investigated phenomenon is the implicit convergence of speakers toward a shared phonetic space. Here, we used dual electroencephalography (dual-EEG) to record brain signals from subjects involved in a relatively constrained interactive task where they were asked to take turns chaining words according to a phonetic rhyming rule. We quantified participants' initial phonetic fingerprints and tracked their phonetic convergence during the interaction via a robust and automatic speaker verification technique. Results show that phonetic convergence is associated with left frontal alpha/low-beta desynchronization during speech preparation and with high-beta suppression before and during listening to speech in right centro-parietal and left frontal sectors, respectively. With this work, we provide evidence that mutual adaptation of phonetic speech targets correlates with specific alpha and beta oscillatory dynamics. Alpha and beta oscillatory dynamics may index the coordination of "when" as well as "how" speech interaction takes place, reinforcing the suggestion that perception and production processes are highly interdependent and co-constructed during a conversation.
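
    The desynchronization and suppression effects reported here are conventionally expressed as event-related (de)synchronization (ERD/ERS), i.e. percent power change relative to a pre-event baseline. A minimal sketch of that computation on toy band-limited power; the baseline window and data are assumptions, not the study's settings.

        import numpy as np

        def erd_percent(power, times, baseline=(-1.0, -0.5)):
            """Event-related (de)synchronization as percent change from baseline.

            power : (n_trials, n_times) band-limited power envelope (e.g.,
                    squared magnitude of a band-passed, Hilbert-transformed
                    signal in the alpha or beta band)
            Negative values indicate desynchronization (power suppression).
            """
            mask = (times >= baseline[0]) & (times < baseline[1])
            ref = power[:, mask].mean()
            return 100.0 * (power.mean(axis=0) - ref) / ref

        # Toy usage on simulated beta-band power around speech onset
        rng = np.random.default_rng(4)
        times = np.linspace(-1.5, 0.5, 400)
        erd = erd_percent(rng.random((80, 400)) + 1.0, times)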